Reinforcement learning (RL) aims to train an agent from a reward function in a given environment, while inverse reinforcement learning (IRL) seeks to recover the reward function from observing an expert's behavior. It is well known that, in general, various reward functions can lead to the same optimal policy, and hence IRL is ill-defined. However, Cao et al. (2021) showed that if we observe two or more experts with different discount factors, or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work first shows an equivalent identifiability statement for multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is also shown to be necessary. We then extend our results to various different setups: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions on the features under which data from multiple experts in a given environment allows generalization, i.e., training an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
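The identifiability condition above ultimately reduces to a numerical rank computation on a matrix assembled from the experts' dynamics and discount factors. How that matrix is constructed is specific to the paper, but the rank check such a test would rely on can be sketched generically:

```python
def matrix_rank(M, tol=1e-9):
    """Numerical rank of a matrix (list of rows) via Gaussian
    elimination with partial pivoting. A generic utility, not the
    paper's specific identifiability test."""
    A = [row[:] for row in M]  # work on a copy
    rows, cols = len(A), len(A[0])
    rank = 0
    for c in range(cols):
        # Pick the largest remaining pivot in this column.
        pivot = max(range(rank, rows), key=lambda r: abs(A[r][c]))
        if abs(A[pivot][c]) < tol:
            continue  # column is (numerically) dependent
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(rank + 1, rows):
            factor = A[r][c] / A[rank][c]
            for k in range(c, cols):
                A[r][k] -= factor * A[rank][k]
        rank += 1
        if rank == rows:
            break
    return rank
```

A rank-deficient matrix (here, a second row that is a multiple of the first) signals non-identifiability in this style of test.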
This work develops new algorithms with rigorous efficiency guarantees for infinite-horizon imitation learning (IL) with linear function approximation, without restrictive coherence assumptions. We begin with a minimax formulation of the problem and then outline how to leverage classical tools from optimization, in particular the proximal point method (PPM) and dual smoothing, for online and offline IL, respectively. Thanks to PPM, we avoid the nested policy evaluation and cost updates for online IL that appeared in prior literature. In particular, we do away with the conventional alternating updates by optimizing a single convex and smooth objective over both the cost and the Q-functions. When solved inexactly, we relate the optimization error to the suboptimality of the recovered policy. As an added bonus, by reinterpreting PPM as dual smoothing centered at the expert policy, we also obtain an offline IL algorithm enjoying theoretical guarantees in terms of the required number of expert trajectories. Finally, we achieve convincing empirical performance with both linear and neural network function approximation.
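The proximal point method named above iterates $x_{k+1} = \arg\min_y f(y) + \|y - x_k\|^2 / (2\lambda)$. A one-dimensional numerical sketch (the paper applies PPM to structured convex IL objectives, not to this toy quadratic, and solves its subproblems differently):

```python
def num_grad(g, y, eps=1e-6):
    """Central-difference numerical derivative of a scalar function."""
    return (g(y + eps) - g(y - eps)) / (2 * eps)

def prox(f, x, lam, steps=200, lr=0.1):
    """Approximately solve argmin_y f(y) + (y - x)**2 / (2*lam)
    by plain gradient descent on the regularized subproblem."""
    g = lambda y: f(y) + (y - x) ** 2 / (2 * lam)
    y = x
    for _ in range(steps):
        y -= lr * num_grad(g, y)
    return y

def proximal_point(f, x0, lam=1.0, iters=30):
    """PPM outer loop: each iterate is the prox of the previous one."""
    x = x0
    for _ in range(iters):
        x = prox(f, x, lam)
    return x
```

The quadratic regularizer keeps each subproblem well-conditioned even when f itself is flat, which is the feature the paper exploits to avoid nested cost/policy updates.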
This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL) with $\epsilon$-greedy exploration under the online setting. This problem setting is motivated by the successful deep Q-networks (DQN) framework, which falls in this regime. In this work, we provide an initial attempt at a theoretical understanding of deep RL from the perspective of function classes and neural network architectures (e.g., width and depth) beyond the "linear" regime. Specifically, we focus on value-based algorithms with $\epsilon$-greedy exploration via deep (and two-layer) neural networks endowed by Besov (and Barron) function spaces, respectively, which aim to approximate an $\alpha$-smooth Q-function in a $d$-dimensional feature space. We prove that, with $T$ episodes, scaling the width $m = \widetilde{\mathcal{O}}(T^{\frac{d}{2\alpha + d}})$ and the depth $L = \mathcal{O}(\log T)$ of the neural network for deep RL is sufficient for learning with sublinear regret in Besov spaces. Moreover, for a two-layer neural network endowed by the Barron space, scaling the width $\Omega(\sqrt{T})$ is sufficient. The key issue in our analysis is how to estimate the temporal difference error under deep neural function approximation, as $\epsilon$-greedy exploration is not enough to ensure "optimism". Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}\mu)$-integrable space over a certain averaged measure $\mu$, and transforms it into a generalization problem under the non-iid setting. This might be of independent interest to RL theory for a better understanding of $\epsilon$-greedy exploration in deep RL.
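The value-based algorithms analyzed above pair temporal-difference updates with $\epsilon$-greedy action selection. A minimal tabular sketch of that exploration rule (the paper's setting replaces the table with a deep network; the two-state environment below is a made-up toy for illustration):

```python
import random
from collections import defaultdict

class TwoStateChain:
    """Hypothetical toy MDP: from state 0, action 1 reaches a terminal
    state with reward 1; action 0 stays in state 0 with reward 0."""
    n_actions = 2
    def reset(self):
        return 0
    def step(self, action):
        if action == 1:
            return 1, 1.0, True   # next_state, reward, done
        return 0, 0.0, False

def epsilon_greedy(Q, state, n_actions, epsilon):
    """With probability epsilon act uniformly at random, else greedily."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(state, a)])

def q_learning_episode(env, Q, epsilon=0.1, alpha=0.5, gamma=0.99,
                       max_steps=100):
    """One episode of tabular Q-learning with epsilon-greedy exploration."""
    state = env.reset()
    for _ in range(max_steps):
        action = epsilon_greedy(Q, state, env.n_actions, epsilon)
        next_state, reward, done = env.step(action)
        # Temporal-difference target and update.
        best_next = max(Q[(next_state, a)] for a in range(env.n_actions))
        td_target = reward + (0.0 if done else gamma * best_next)
        Q[(state, action)] += alpha * (td_target - Q[(state, action)])
        if done:
            break
        state = next_state
    return Q
```

The paper's point is precisely that this exploration rule provides no optimism bonus, so the regret analysis must control the TD error through generalization arguments instead.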
We study the inverse reinforcement learning (IRL) problem under a transition dynamics mismatch between the expert and the learner. Specifically, we consider the maximum causal entropy (MCE) IRL learner model and provide a tight upper bound on the learner's performance degradation based on the $\ell_1$-distance between the transition dynamics of the expert and the learner. Leveraging insights from the robust RL literature, we propose a robust MCE IRL algorithm, an efficient method for mitigating this mismatch. Finally, we empirically demonstrate the stable performance of our algorithm, compared to the standard MCE IRL algorithm, under transition dynamics mismatch in both finite and continuous MDP problems.
Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process governing GCN's training-inference process. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task.
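The exact 2NCS definition is given in the paper; the following is a plausible CCNS-style sketch over two-hop neighborhoods, under the assumption that the statistic histograms the classes seen within two hops of each node and then averages the cosine similarity between same-class nodes' histograms. The adjacency-dict graph encoding is illustrative:

```python
import math
from itertools import combinations

def two_hop_neighbors(adj, v):
    """Nodes reachable within one or two hops of v, excluding v itself."""
    one = set(adj[v])
    two = set()
    for u in one:
        two.update(adj[u])
    reachable = one | two
    reachable.discard(v)
    return reachable

def class_histogram(nodes, labels, n_classes):
    """Normalized class distribution over a set of nodes."""
    h = [0.0] * n_classes
    for u in nodes:
        h[labels[u]] += 1.0
    total = sum(h)
    return [x / total for x in h] if total else h

def cosine(p, q):
    num = sum(a * b for a, b in zip(p, q))
    den = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return num / den if den else 0.0

def two_hop_class_similarity(adj, labels, n_classes):
    """Average cosine similarity between the 2-hop neighborhood class
    histograms of same-class node pairs (a sketch in the spirit of 2NCS,
    not necessarily the paper's exact formula)."""
    hist = {v: class_histogram(two_hop_neighbors(adj, v), labels, n_classes)
            for v in adj}
    by_class = {}
    for v in adj:
        by_class.setdefault(labels[v], []).append(v)
    sims, count = 0.0, 0
    for nodes in by_class.values():
        for u, v in combinations(nodes, 2):
            sims += cosine(hist[u], hist[v])
            count += 1
    return sims / count if count else 0.0
```

On a graph where same-class nodes see identical 2-hop class mixtures, the statistic is 1; it drops as their neighborhoods diverge, matching the intuition that GNN performance tracks neighborhood consistency rather than raw homophily.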
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results alongside all data or model artifacts created during our investigation.
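The open-domain setting prepends retrieval to summarization. A toy retrieve-then-summarize pipeline, where term-overlap scoring and lead-sentence extraction are simplistic stand-ins for the real retrievers and trained summarizers evaluated in the work:

```python
def retrieve(query, docs, k=2):
    """Rank documents by term overlap with the query (a stand-in for a
    real retriever such as BM25 or a dense retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def summarize(docs):
    """Naive extractive 'summarizer': the lead sentence of each input
    document (a placeholder for a trained multi-document summarizer)."""
    return " ".join(d.split(". ")[0].rstrip(".") + "." for d in docs)

def open_domain_mds(query, corpus, k=2):
    """Open-domain MDS = retrieve the input set, then summarize it."""
    return summarize(retrieve(query, corpus, k))
```

The paper's finding maps directly onto this pipeline: errors in `retrieve` (off-topic documents entering the top-k) propagate into the summary, unless the summarizer was trained on retrieved rather than gold input sets.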
We consider the problem of two active particles in 2D complex flows with the multi-objective goals of minimizing both the dispersion rate and the energy consumption of the pair. We approach the problem by means of Multi Objective Reinforcement Learning (MORL), combining scalarization techniques together with a Q-learning algorithm, for Lagrangian drifters that have variable swimming velocity. We show that MORL is able to find a set of trade-off solutions forming an optimal Pareto frontier. As a benchmark, we show that a set of heuristic strategies are dominated by the MORL solutions. We consider the situation in which the agents cannot update their control variables continuously, but only after a discrete (decision) time, $\tau$. We show that there is a range of decision times, in between the Lyapunov time and the continuous updating limit, where Reinforcement Learning finds strategies that significantly improve over heuristics. In particular, we discuss how large decision times require enhanced knowledge of the flow, whereas for smaller $\tau$ all a priori heuristic strategies become Pareto optimal.
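Linear scalarization collapses the two objectives into a single reward per weight setting, and sweeping the weight yields candidate Pareto points. A minimal sketch of the scalarization and the non-dominated filtering, with both objectives written as "larger is better" (e.g., negative dispersion rate and negative energy consumption):

```python
def scalarize(rewards, weight):
    """Linear scalarization of a 2-objective reward vector into a
    single scalar reward, as used to run standard Q-learning per weight."""
    return weight * rewards[0] + (1.0 - weight) * rewards[1]

def pareto_front(points):
    """Keep the non-dominated points (both objectives maximized)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

def sweep_weights(evaluate, weights):
    """One policy per scalarization weight; `evaluate` is a stand-in that
    trains/evaluates a policy for a weight and returns its 2-objective
    payoff. The non-dominated payoffs approximate the Pareto frontier."""
    return pareto_front([evaluate(w) for w in weights])
```

Heuristic strategies can then be benchmarked by checking whether their payoff points survive the same `pareto_front` filter against the learned solutions, which is the dominance comparison the abstract describes.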
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs. However, popular explanation methods have been found to be fragile to minor perturbations of input features or model parameters. Relying on constraint relaxation techniques from non-convex optimization, we develop a method that upper-bounds the largest change an adversary can make to a gradient-based explanation via bounded manipulation of either the input features or model parameters. By propagating a compact input or parameter set as symbolic intervals through the forwards and backwards computations of the neural network we can formally certify the robustness of gradient-based explanations. Our bounds are differentiable, hence we can incorporate provable explanation robustness into neural network training. Empirically, our method surpasses the robustness provided by previous heuristic approaches. We find that our training method is the only method able to learn neural networks with certificates of explanation robustness across all six datasets tested.
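The certificates rest on propagating input sets as intervals through the network. A minimal forward-pass sketch of interval propagation through one linear layer and a ReLU (the paper's machinery additionally propagates intervals through the backward pass to bound changes in the gradient-based explanation itself):

```python
def interval_linear(lo, hi, W, b):
    """Propagate an elementwise interval [lo, hi] through y = W x + b.
    Each output bound takes the worst-case input endpoint per weight sign."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            if w >= 0:
                lo_acc += w * l
                hi_acc += w * h
            else:
                lo_acc += w * h  # negative weight flips the endpoints
                hi_acc += w * l
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]
```

Because every operation here is built from differentiable pieces, bounds computed this way can be driven down during training, which is what makes provable explanation robustness a trainable objective.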